Black-Box α-Divergence Minimization: Supplementary

Authors

  • José Miguel Hernández-Lobato
  • Yingzhen Li
  • Mark Rowland
  • Daniel Hernández-Lobato
  • Richard E. Turner
Abstract

This section revisits the original EP algorithm as a min-max optimization problem. Recall from the main text that we approximate the true posterior distribution p(θ|D) with a distribution in exponential-family form, q(θ) ∝ exp{s(θ)^⊤ λ_q}. We then define an unnormalized cavity distribution q_{\n}(θ) = exp{s(θ)^⊤ λ_{\n}} for every data point x_n. According to [Minka, 2001], the EP energy function is then as follows.
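The display below is a sketch of Minka's min-max form reconstructed under the definitions above, not a verbatim quotation of the paper; the prior natural parameters λ_0 and the normalizer Z(λ) = ∫ exp{s(θ)^⊤ λ} dθ are notation introduced here, consistent with standard EP:

$$
E\big(\lambda_q, \{\lambda_{\backslash n}\}\big) = \log Z(\lambda_0) + (N-1)\,\log Z(\lambda_q) - \sum_{n=1}^{N} \log \int p(x_n \mid \theta)\, \exp\{ s(\theta)^\top \lambda_{\backslash n} \}\, d\theta,
$$

and EP corresponds to the saddle point min_{λ_q} max_{{λ_{\n}}} E(λ_q, {λ_{\n}}) subject to the constraint (N−1)λ_q + λ_0 = Σ_n λ_{\n}, which ties the cavity parameters to the posterior approximation.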


Similar resources

Black-Box α-Divergence Minimization

Black-box alpha (BB-α) is a new approximate inference method based on the minimization of α-divergences. BB-α scales to large datasets because it can be implemented using stochastic gradient descent. BB-α can be applied to complex probabilistic models with little effort since it only requires as input the likelihood function and its gradients. These gradients can be easily obtained using automatic differentiation.
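As a concrete illustration of the black-box recipe (a Monte Carlo estimate of the energy, differentiated automatically and minimized by stochastic gradient descent), below is a minimal sketch on a toy Gaussian model in JAX. It assumes the tied-factor form of the BB-α energy, L_α(q) = KL[q||p_0] − (1/α) Σ_n log E_q[p(x_n|θ)^α], as a stand-in for the paper's exact objective; the model, sample count K, and step size are likewise illustrative.

    # Minimal BB-alpha-style sketch (assumed tied-factor energy, toy model):
    #   L_alpha(q) = KL[q || p0] - (1/alpha) * sum_n log E_q[ p(x_n|theta)^alpha ]
    # As alpha -> 0, -(1/alpha) log E_q[p^alpha] -> -E_q[log p], recovering the
    # variational free-energy (negative ELBO).
    import jax
    import jax.numpy as jnp

    alpha = 0.5   # divergence parameter (illustrative choice)
    K = 32        # Monte Carlo samples per gradient step

    def log_lik(theta, x):
        # Toy likelihood: x_n ~ N(theta, 1)
        return -0.5 * (x - theta) ** 2 - 0.5 * jnp.log(2.0 * jnp.pi)

    def kl_gauss(mu, log_sigma):
        # Closed-form KL[ N(mu, sigma^2) || N(0, 1) ]
        return 0.5 * (jnp.exp(2.0 * log_sigma) + mu ** 2 - 1.0) - log_sigma

    def energy(params, x, key):
        mu, log_sigma = params
        eps = jax.random.normal(key, (K,))
        theta = mu + jnp.exp(log_sigma) * eps              # reparameterization
        ll = jax.vmap(lambda t: log_lik(t, x))(theta)      # shape (K, N)
        # Monte Carlo estimate of log E_q[p(x_n|theta)^alpha], per data point
        log_moment = jax.scipy.special.logsumexp(alpha * ll, axis=0) - jnp.log(K)
        return kl_gauss(mu, log_sigma) - jnp.sum(log_moment) / alpha

    # Gradients of the black-box energy come from automatic differentiation.
    x = jnp.array([0.8, 1.2, 1.0, 0.6])
    params = (jnp.array(0.0), jnp.array(0.0))              # (mu, log_sigma)
    grad_fn = jax.jit(jax.grad(energy))
    key = jax.random.PRNGKey(0)
    for _ in range(500):                                   # plain SGD loop
        key, sub = jax.random.split(key)
        g = grad_fn(params, x, sub)
        params = tuple(p - 1e-2 * gi for p, gi in zip(params, g))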


Black-box α-divergence for Deep Generative Models

We propose using the black-box α-divergence [1] as a flexible alternative to variational inference in deep generative models. By simply switching the objective function from the variational free-energy to the black-box α-divergence objective we are able to learn better generative models, which is demonstrated by a considerable improvement of the test log-likelihood in several preliminary experiments.
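In symbols, the switch amounts to replacing the variational free-energy with an α-energy; written here with the same tied-factor BB-α form assumed in the earlier sketch (the abstract itself does not spell out the exact objective):

$$
\mathcal{L}_{\mathrm{VFE}}(q) = \mathrm{KL}[q\,\|\,p_0] - \sum_{n} \mathbb{E}_{q}[\log p(x_n \mid \theta)], \qquad \mathcal{L}_{\alpha}(q) = \mathrm{KL}[q\,\|\,p_0] - \frac{1}{\alpha} \sum_{n} \log \mathbb{E}_{q}\big[p(x_n \mid \theta)^{\alpha}\big],
$$

with L_α → L_VFE as α → 0, so the free-energy is recovered as a limiting case.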


Monte-Carlo SURE: A Black-Box Optimization of Regularization Parameters for General Denoising Algorithms - Supplementary Material

This material supplements some sections of the paper entitled “Monte-Carlo SURE: A Black-Box Optimization of Regularization Parameters for General Denoising Algorithms”. Here, we elaborate on the solution to the differentiability issue associated with the Monte-Carlo divergence estimation proposed (in Theorem 2) in the paper. Firstly, we verify the validity of the Taylor expansion-based argument...
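For context, the Monte-Carlo divergence estimate in question is the random-probe, finite-difference estimator at the core of Monte-Carlo SURE: for a denoiser f and input y, div f(y) = Σ_i ∂f_i/∂y_i ≈ E_b[b^⊤(f(y + εb) − f(y))]/ε with b ~ N(0, I). Below is a minimal sketch; the denoiser, probe step ε, and probe count are illustrative assumptions.

    # Monte-Carlo divergence estimation (random-probe finite difference).
    # Only evaluations of f are needed, which is why differentiability of f
    # is the delicate point the supplement elaborates on.
    import jax
    import jax.numpy as jnp

    def mc_divergence(f, y, key, eps=1e-3, probes=8):
        # Average several probes b ~ N(0, I) to reduce estimator variance.
        bs = jax.random.normal(key, (probes,) + y.shape)
        fy = f(y)
        est = jnp.stack([b @ (f(y + eps * b) - fy) for b in bs])
        return jnp.mean(est) / eps

    # Example: soft-threshold denoiser; its exact divergence is the number of
    # coordinates above the threshold (here 2), so the estimate should be ~2.
    soft = lambda y: jnp.sign(y) * jnp.maximum(jnp.abs(y) - 0.5, 0.0)
    y = jnp.array([2.0, 0.1, -1.5, 0.3])
    print(mc_divergence(soft, y, jax.random.PRNGKey(0)))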


Variational Inference via χ Upper Bound Minimization

Variational inference enables Bayesian analysis for complex probabilistic models with massive data sets. It works by positing a family of distributions and finding the member of the family that is closest to the posterior. While successful, variational methods can run into pathologies; for example, they typically underestimate posterior uncertainty. We propose CHIVI, a complementary algorithm ...


Variational Inference via χ Upper Bound Minimization

Variational inference (VI) is widely used as an efficient alternative to Markov chain Monte Carlo. It posits a family of approximating distributions q and finds the closest member to the exact posterior p. Closeness is usually measured via a divergence D(q||p) from q to p. While successful, this approach also has problems. Notably, it typically leads to underestimation of the posterior variance...
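For reference, the upper bound minimized in this line of work is the χ upper bound (CUBO), reconstructed here from the published paper; treat the notation as a sketch:

$$
\mathrm{CUBO}_{n}(q) = \frac{1}{n} \log \mathbb{E}_{q(z)}\!\left[\left(\frac{p(x, z)}{q(z)}\right)^{\! n}\right] \;\ge\; \log p(x) \;\ge\; \mathrm{ELBO}(q), \qquad n \ge 1,
$$

where the first inequality follows from Jensen's inequality applied to the convex map t ↦ t^n. Minimizing CUBO_2 is equivalent to minimizing the χ² divergence D_{χ²}(p||q), and pairing it with the ELBO sandwiches the model evidence log p(x).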



Publication date: 2016